Approximate maximum entropy principles via Goemans-Williamson with applications to provable variational methods

Authors

  • Andrej Risteski
  • Yuanzhi Li
Abstract

The well-known maximum-entropy principle due to Jaynes, which states that, given mean parameters, the maximum-entropy distribution matching them lies in an exponential family, has been very popular in machine learning due to its “Occam’s razor” interpretation. Unfortunately, calculating the potentials in the maximum-entropy distribution is intractable [BGS14]. We provide computationally efficient versions of this principle when the mean parameters are pairwise moments: we design distributions that approximately match given pairwise moments, while having entropy comparable to that of the maximum-entropy distribution matching those moments. We additionally provide surprising applications of the approximate maximum-entropy principle to designing provable variational methods for partition function calculations for Ising models without any assumptions on the potentials of the model. More precisely, we show that we can get approximation guarantees for the log-partition function comparable to those in the low-temperature limit, which is the setting of optimization of quadratic forms over the hypercube ([AN06]).
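For orientation, the objects the abstract refers to can be written schematically as follows (the notation J, μ, Z below is a generic textbook formulation, not taken from the paper itself):

\[
\begin{aligned}
&\max_{p}\; H(p) = -\sum_{x \in \{-1,1\}^n} p(x)\log p(x)
\quad \text{s.t.}\quad \mathbb{E}_{p}[x_i x_j] = \mu_{ij}\ \ \forall\, i<j,\\
&\text{whose optimizer lies in the exponential (Ising) family}\quad
p_J(x) \propto \exp\Big(\sum_{i<j} J_{ij}\, x_i x_j\Big),\\
&\text{with log-partition function}\quad
\log Z(J) = \log \sum_{x \in \{-1,1\}^n} \exp\Big(\sum_{i<j} J_{ij}\, x_i x_j\Big).
\end{aligned}
\]

Recovering the potentials J from the moments μ, or computing log Z(J), is the intractable step that the approximate versions of the principle work around.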


Similar resources

Approximating quadratic programming with bound constraints

We consider the problem of approximating the global maximum of a quadratic program (QP) with n variables subject to bound constraints. Based on the results of Goemans and Williamson [4] and Nesterov [6], we show that a 4/7-approximate solution can be obtained in polynomial time.
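Schematically, the quadratic program and the semidefinite relaxation behind this type of guarantee look as follows (a generic formulation assumed for illustration, not quoted from the cited paper):

\[
\max_{x \in [-1,1]^n} x^{\top} A x
\qquad\leadsto\qquad
\max_{X \succeq 0,\ X_{ii} \le 1}\ \langle A, X \rangle,
\]

after which the matrix solution X is rounded to a feasible vector x whose value is within a constant factor (4/7 in the result above) of the optimum.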


Max-cut Problem

The max-cut problem is one of many NP-hard graph-theory problems that have attracted many researchers over the years. Though there is almost no hope of finding a polynomial-time algorithm for the max-cut problem, various heuristics, or combinations of optimization and heuristic methods, have been developed to solve it. Among them is the efficient algorithm of Goemans and Williamson. Their algorithm ...
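As a concrete illustration of the Goemans-Williamson approach mentioned above, here is a minimal sketch in Python (the use of numpy and cvxpy, and the function name, are assumptions of this sketch rather than anything prescribed by the cited work): solve the standard SDP relaxation of max-cut, then round with a random hyperplane.

import numpy as np
import cvxpy as cp

def goemans_williamson_maxcut(W, seed=0):
    """Sketch of GW max-cut: SDP relaxation plus random-hyperplane rounding.

    W: symmetric (n x n) nonnegative weight matrix with zero diagonal.
    Returns a vector of +/-1 labels describing one side of the cut.
    """
    n = W.shape[0]
    # SDP relaxation: maximize sum_{i<j} W_ij (1 - X_ij)/2 over PSD X with unit diagonal.
    # Summing over the full symmetric matrix double-counts each pair, hence the /4.
    X = cp.Variable((n, n), PSD=True)
    objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
    constraints = [cp.diag(X) == 1]
    cp.Problem(objective, constraints).solve()

    # Recover unit vectors v_i with <v_i, v_j> = X_ij (small jitter for numerical stability).
    eigvals, eigvecs = np.linalg.eigh(X.value + 1e-9 * np.eye(n))
    V = eigvecs * np.sqrt(np.clip(eigvals, 0, None))  # rows of V are the vectors v_i

    # Random hyperplane rounding: each vertex takes the sign of its projection
    # onto a random Gaussian direction.
    rng = np.random.default_rng(seed)
    r = rng.standard_normal(n)
    return np.sign(V @ r)

This rounding achieves the well-known 0.878... approximation factor in expectation for nonnegative edge weights.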


Sequential Optimality Conditions and Variational Inequalities

In recent years, sequential optimality conditions have frequently been used to establish convergence of iterative methods for solving nonlinear constrained optimization problems. Sequential optimality conditions do not require any constraint qualifications. In this paper, we present the necessary sequential complementary approximate Karush-Kuhn-Tucker (CAKKT) condition for a point to be a solution of a ...


Regularization vs. Relaxation: A conic optimization perspective of statistical variable selection

Variable selection is a fundamental task in statistical data analysis. Sparsity-inducing regularization methods are a popular class of methods that simultaneously perform variable selection and model estimation. The central problem is a quadratic optimization problem with an ℓ0-norm penalty. Exactly enforcing the ℓ0-norm penalty is computationally intractable for larger-scale problems, so diffe...
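The central problem and the convex surrogate it is typically relaxed to can be written as (generic notation, assumed for illustration rather than quoted from the cited paper):

\[
\min_{x}\ \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_0
\qquad\text{vs.}\qquad
\min_{x}\ \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda \|x\|_1,
\]

where the ℓ0 penalty counts nonzero entries (exact but intractable) and the ℓ1 penalty is its tractable convex relaxation, as in the lasso.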


A Fast, Adaptive Variant of the Goemans-Williamson Scheme for the Prize-Collecting Steiner Tree Problem

We introduce a new variant of the Goemans-Williamson (GW) scheme for the Prize-Collecting Steiner Tree Problem (PCST). Motivated by applications in signal processing, the focus of our contribution is to construct a very fast algorithm for the PCST problem that still achieves a provable approximation guarantee. Our overall algorithm runs in time O(d m log n) on a graph with m edges, where all edge...



Publication year: 2016